Fusion++: Volumetric Object-Level SLAM

We propose an online object-level SLAM system which builds a persistent and accurate 3D graph map of arbitrary reconstructed objects. As an RGB-D camera browses a cluttered indoor scene, Mask R-CNN instance segmentations are used to initialise compact per-object Truncated Signed Distance Function (TSDF) reconstructions with object-size-dependent resolutions and a novel 3D foreground mask. Reconstructed objects are stored in an optimisable 6DoF pose graph, which is our only persistent map representation. Objects are incrementally refined via depth fusion, and are used for tracking, relocalisation and loop closure detection. Loop closures cause adjustments in the relative pose estimates of object instances, but no intra-object warping. Each object also carries semantic information, which is refined over time, and an existence probability to account for spurious instance predictions. We demonstrate our approach on a hand-held RGB-D sequence from a cluttered office scene with a large number and variety of object instances, highlighting how the system closes loops and makes good use of existing objects on repeated loops. We quantitatively evaluate the trajectory error of our system against a baseline approach on the RGB-D SLAM benchmark, and qualitatively compare reconstruction quality of discovered objects on the YCB video dataset. Performance evaluation shows our approach is highly memory efficient and runs online at 4-8 Hz (excluding relocalisation) despite not being optimised at the software level.
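The incremental depth fusion mentioned above typically follows the standard weighted-average TSDF update. As an illustrative sketch only (the function and parameter names are assumptions, not Fusion++'s actual implementation), a single per-voxel update might look like:

```python
# Sketch of the standard weighted-average TSDF fusion rule used in
# volumetric depth fusion. Illustrative only: names, the per-observation
# weight of 1, and the weight cap are assumptions, not the paper's code.

def tsdf_update(tsdf, weight, sdf_measurement, truncation, max_weight=64.0):
    """Fuse one signed-distance measurement into a voxel's running average.

    tsdf, weight      -- current fused TSDF value and accumulated weight
    sdf_measurement   -- new signed distance from a depth frame (metres)
    truncation        -- half-width of the truncation band (metres)
    max_weight        -- cap so old observations can still be forgotten
    """
    # Truncate the signed distance to the band [-truncation, truncation].
    d = max(-truncation, min(truncation, sdf_measurement))
    # Weighted running average, giving each new observation weight 1.
    new_tsdf = (tsdf * weight + d) / (weight + 1.0)
    new_weight = min(weight + 1.0, max_weight)
    return new_tsdf, new_weight
```

In an object-level system such as the one described, each object instance would hold its own small voxel grid and apply this update only to voxels inside its 3D foreground mask, which keeps memory per object compact.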

John McCormac, Ronald Clark, Michael Bloesch, Stefan Leutenegger, Andrew J. Davison. Fusion++: Volumetric Object-Level SLAM. 3DV, 2018
